17 research outputs found
FLOWGEN: Fast and slow graph generation
Machine learning systems typically apply the same model to both easy and
tough cases. This is in stark contrast with humans, who tend to engage either
fast (instinctive) or slow (analytical) thinking depending on the problem's
difficulty, a distinction described by the dual-process theory of mind. We present
FLOWGEN, a graph-generation model inspired by the dual-process theory of mind
that generates large graphs incrementally. Depending on the difficulty of
completing the graph at the current step, graph generation is routed to either
a fast (weaker) or a slow (stronger) model. These modules have identical
architectures, but vary in the number of parameters and consequently differ in
generative power. Experiments on real-world graphs show that FLOWGEN can
successfully generate graphs similar to those generated by a single large
model, while being up to 2x faster.
Comment: Accepted at the Dynamic Neural Networks Workshop (DyNN), ICML 2022
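The routing idea described above can be summarized in a few lines. The following
is a minimal sketch under assumed names: the small/large models, the
estimate_difficulty helper, and the threshold are hypothetical placeholders, not
the paper's actual interface.

```python
def generate_graph(seed_graph, small_model, large_model,
                   estimate_difficulty, threshold=0.5, max_steps=1000):
    """Grow a graph step by step, routing each step to the fast (small)
    or slow (large) model based on an estimated difficulty score."""
    graph = seed_graph
    for _ in range(max_steps):
        difficulty = estimate_difficulty(graph)   # assumed score in [0, 1]
        model = large_model if difficulty > threshold else small_model
        step = model.propose_step(graph)          # e.g. next node or edge
        if step is None:                          # model signals completion
            break
        graph = graph.apply(step)
    return graph
```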
EntiTables: Smart Assistance for Entity-Focused Tables
Tables are among the most powerful and practical tools for organizing and
working with data. Our motivation is to equip spreadsheet programs with smart
assistance capabilities. We concentrate on one particular family of tables,
namely, tables with an entity focus. We introduce and focus on two specific
tasks: populating rows with additional instances (entities) and populating
columns with new headings. We develop generative probabilistic models for both
tasks. For estimating the components of these models, we consider a knowledge
base as well as a large table corpus. Our experimental evaluation simulates the
various stages of the user entering content into an actual table. A detailed
analysis of the results shows that the models' components are complementary and
that our methods outperform existing approaches from the literature.
Comment: Proceedings of the 40th International ACM SIGIR Conference on
Research and Development in Information Retrieval (SIGIR '17), 2017
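As an illustration only, the row-population task can be read as ranking candidate
entities by a mixture of knowledge-base and table-corpus evidence; the helper
functions and the mixture weight below are hypothetical placeholders, not the
paper's exact estimators.

```python
def rank_candidate_entities(seed_entities, candidates,
                            kb_similarity, corpus_cooccurrence, alpha=0.5):
    """Rank candidates for addition to a table whose rows already contain
    seed_entities, mixing knowledge-base and table-corpus evidence."""
    scores = {}
    for entity in candidates:
        p_kb = kb_similarity(entity, seed_entities)             # knowledge-base evidence
        p_corpus = corpus_cooccurrence(entity, seed_entities)   # table-corpus evidence
        scores[entity] = alpha * p_kb + (1 - alpha) * p_corpus
    return sorted(candidates, key=scores.get, reverse=True)
```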
AutoMix: Automatically Mixing Language Models
Large language models (LLMs) are now available in various sizes and
configurations from cloud API providers. While this diversity offers a broad
spectrum of choices, effectively leveraging the options to optimize
computational cost and performance remains challenging. In this work, we
present AutoMix, an approach that strategically routes queries to larger LMs,
based on the approximate correctness of outputs from a smaller LM. Central to
AutoMix is a few-shot self-verification mechanism, which estimates the
reliability of its own outputs without requiring training. Given that
verifications can be noisy, we employ a meta verifier in AutoMix to refine the
accuracy of these assessments. Our experiments using LLAMA2-13/70B on five
context-grounded reasoning datasets demonstrate that AutoMix surpasses
established baselines, improving the incremental benefit per cost by up to 89%.
Our code and data are available at https://github.com/automix-llm/automix.
Comment: The first two authors contributed equally. Work started and partly
done during Aman's internship at Google. This version adds results on mixing
3 models, and will be presented at the workshop on robustness of
zero/few-shot learning in foundation models, NeurIPS 2023
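A minimal sketch of this routing loop is given below. The helper names
(small_lm, large_lm, self_verify, meta_verify) and the acceptance threshold are
placeholders for illustration; the released code at the repository above is the
authoritative implementation.

```python
def answer(query, context, small_lm, large_lm,
           self_verify, meta_verify, accept_threshold=0.7):
    """Answer with the small LM first; escalate to the large LM only when
    the (meta-verified) self-verification score is low."""
    draft = small_lm(query, context)
    raw_score = self_verify(draft, query, context)   # noisy few-shot verification
    score = meta_verify(raw_score, query, context)   # meta-verifier refines the signal
    if score >= accept_threshold:
        return draft                  # cheap path: keep the small LM's answer
    return large_lm(query, context)   # expensive path: re-query the larger LM
```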
Bridging the Gap: A Survey on Integrating (Human) Feedback for Natural Language Generation
Many recent advances in natural language generation have been fueled by
training large language models on internet-scale data. However, this paradigm
can lead to models that generate toxic, inaccurate, and unhelpful content, and
automatic evaluation metrics often fail to identify these behaviors. As models
become more capable, human feedback is an invaluable signal for evaluating and
improving models. This survey aims to provide an overview of the recent
research that has leveraged human feedback to improve natural language
generation. First, we introduce an encompassing formalization of feedback, and
identify and organize existing research into a taxonomy following this
formalization. Next, we discuss how feedback can be described by its format and
objective, and cover the two approaches proposed to use feedback (either for
training or decoding): directly using the feedback or training feedback models.
We also discuss existing datasets for human-feedback data collection, and
concerns surrounding feedback collection. Finally, we provide an overview of
the nascent field of AI feedback, which exploits large language models to make
judgments based on a set of principles and minimize the need for human
intervention.
Comment: Work in Progress
Self-Refine: Iterative Refinement with Self-Feedback
Like people, LLMs do not always generate the best text for a given generation
problem (e.g., summaries, answers, explanations) on their first try. Just as
people then refine their text, we introduce SELF-REFINE, a framework for
similarly improving initial outputs from LLMs through iterative feedback and
refinement. The main idea is to generate an output using an LLM, then allow the
same model to provide multi-aspect feedback for its own output; finally, the
same model refines its previously generated output given its own feedback.
Unlike earlier work, our iterative refinement framework does not require
supervised training data or reinforcement learning, and works with a single
LLM. We experiment with 7 diverse tasks, ranging from review rewriting to math
reasoning, demonstrating that our approach outperforms direct generation. In
all tasks, outputs generated with SELF-REFINE are preferred by humans and by
automated metrics over those generated directly with GPT-3.5 and GPT-4,
improving on average by an absolute 20% across tasks.
Comment: Code, data, and demo at https://selfrefine.info
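A minimal sketch of the generate-feedback-refine loop with a single model
follows; the prompt strings and the stopping check are placeholders, and the
released code at https://selfrefine.info is the authoritative version.

```python
def self_refine(llm, task_prompt, max_iters=4):
    """Generate an initial output, then repeatedly ask the same model for
    feedback and a revision until it judges the output good enough."""
    output = llm(task_prompt)
    for _ in range(max_iters):
        feedback = llm(f"Give actionable feedback on this output:\n{output}")
        if "no further changes" in feedback.lower():   # placeholder stopping check
            break
        output = llm(f"Task: {task_prompt}\nOutput: {output}\n"
                     f"Feedback: {feedback}\nRevise the output accordingly.")
    return output
```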